
    From Interpreter to Compiler and Virtual Machine: A Functional Derivation

    We show how to derive a compiler and a virtual machine from a compositional interpreter. We first illustrate the derivation with two evaluation functions and two normalization functions. We obtain Krivine's machine, Felleisen et al.'s CEK machine, and a generalization of these machines performing strong normalization, which is new. We observe that several existing compilers and virtual machines, e.g., the Categorical Abstract Machine (CAM), Schmidt's VEC machine, and Leroy's Zinc abstract machine, are already in derived form, and we present the corresponding interpreter for the CAM and the VEC machine. We also consider Hannan and Miller's CLS machine and Landin's SECD machine. We derive Krivine's machine via a call-by-name CPS transformation and the CEK machine via a call-by-value CPS transformation. These two derivations hold both for an evaluation function and for a normalization function. They provide a non-trivial illustration of Reynolds's warning about the evaluation order of a meta-language.
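
    As a rough, hedged illustration of the derivation style (not the paper's exact development), the sketch below gives a compositional call-by-value evaluator for a tiny lambda calculus in OCaml, together with its CPS-transformed variant; defunctionalizing the continuations of the latter is the step that yields a CEK-style machine. All type and function names here are our own assumptions.

        (* A minimal sketch, assuming a pure lambda calculus with named
           variables; not the paper's actual development. *)
        type term =
          | Var of string
          | Lam of string * term
          | App of term * term

        type value =
          | Closure of string * term * env
        and env = (string * value) list

        (* Compositional, direct-style call-by-value evaluator. *)
        let rec eval (t : term) (e : env) : value =
          match t with
          | Var x -> List.assoc x e
          | Lam (x, b) -> Closure (x, b, e)
          | App (t1, t2) ->
              let Closure (x, b, e') = eval t1 e in
              let v = eval t2 e in
              eval b ((x, v) :: e')

        (* The same evaluator after a call-by-value CPS transformation:
           the continuation argument makes the evaluation order explicit,
           and defunctionalizing it yields the frames of a CEK-style
           machine. *)
        let rec eval_cps (t : term) (e : env) (k : value -> value) : value =
          match t with
          | Var x -> k (List.assoc x e)
          | Lam (x, b) -> k (Closure (x, b, e))
          | App (t1, t2) ->
              eval_cps t1 e (fun (Closure (x, b, e')) ->
                eval_cps t2 e (fun v ->
                  eval_cps b ((x, v) :: e') k))

        let run (t : term) : value = eval_cps t [] (fun v -> v)

    The choice of CPS transformation (call-by-value here, call-by-name for Krivine's machine) is exactly what makes the result independent of the meta-language's own evaluation order, which is the point of Reynolds's warning.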

    Control-flow analysis of function calls and returns by abstract interpretation

    Abstract interpretation techniques are used to derive a control-flow analysis for a simple higher-order functional language. The analysis approximates the interprocedural control flow of both function calls and returns in the presence of first-class functions and tail-call optimization. The analysis is systematically derived by abstract interpretation of a stack-based abstract machine using a series of Galois connections. We prove that the analysis is equivalent to an analysis obtained by first transforming the program into continuation-passing style and then performing control-flow analysis of the transformed program. We then show how the analysis induces an equivalent constraint-based formulation, thereby providing a rational reconstruction of a constraint-based CFA from abstract interpretation principles.
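
    As a hedged illustration of the constraint-based formulation mentioned above (not of the paper's stack-based machine or its Galois connections), the sketch below generates standard 0-CFA set constraints for a labelled lambda calculus in OCaml; the labels, constructors, and helper names are our own assumptions.

        (* A minimal sketch of the constraint-based view of CFA: each
           subterm carries a label, and we emit set constraints relating
           the caches C(l) and the abstract environments r(x). *)
        type term =
          | Var of string
          | Lam of string * labelled
          | App of labelled * labelled
        and labelled = { label : int; node : term }

        type set_var =
          | Cache of int                 (* C(l): values subterm l may have *)
          | Env of string                (* r(x): values bound to x *)

        type constr =
          | Mem of labelled * set_var            (* {lam} <= rhs *)
          | Subset of set_var * set_var          (* lhs <= rhs *)
          | Cond of labelled * int * constr      (* lam in C(l) => constr *)

        (* All abstractions occurring in the program. *)
        let rec lambdas (t : labelled) : labelled list =
          match t.node with
          | Var _ -> []
          | Lam (_, b) -> t :: lambdas b
          | App (t1, t2) -> lambdas t1 @ lambdas t2

        (* Generate the 0-CFA constraints for subterm t of the program. *)
        let rec constraints (program : labelled) (t : labelled) : constr list =
          match t.node with
          | Var x -> [ Subset (Env x, Cache t.label) ]
          | Lam (_, b) -> Mem (t, Cache t.label) :: constraints program b
          | App (t1, t2) ->
              constraints program t1
              @ constraints program t2
              @ List.concat_map
                  (fun lam ->
                    match lam.node with
                    | Lam (x, body) ->
                        [ Cond (lam, t1.label, Subset (Cache t2.label, Env x));
                          Cond (lam, t1.label, Subset (Cache body.label, Cache t.label)) ]
                    | _ -> [])
                  (lambdas program)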

    A Case Study in Modular Programming: Using AspectJ and OCaml in an Undergraduate Compiler Project

    We report our experience in using two different languages to build the same software project. Specifically, we have converted an entire undergraduate compiler course from using AspectJ, an aspect-oriented language, to using OCaml, a functional language. The course has evolved over a period of eight years with, on average, 60 students completing it every year. In this article, we analyze our usage of the two programming languages and we compare and contrast the two software projects on a number of parameters, including how they enable students to write and test individual compiler phases in a modular way.
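
    As a hedged illustration of phase-wise modularity in OCaml (not the course's actual project), the sketch below packages a toy constant-folding pass behind a common PHASE signature so that it can be unit-tested in isolation; all module, type, and function names are our own assumptions.

        (* A common interface that every compiler phase implements. *)
        module type PHASE = sig
          type input
          type output
          val run : input -> output
        end

        (* A tiny expression AST and a toy constant-folding phase over it. *)
        type expr =
          | Int of int
          | Add of expr * expr
          | Mul of expr * expr

        module ConstFold : PHASE with type input = expr and type output = expr =
        struct
          type input = expr
          type output = expr
          let rec run e =
            match e with
            | Int _ -> e
            | Add (a, b) ->
                (match (run a, run b) with
                 | Int x, Int y -> Int (x + y)
                 | a', b' -> Add (a', b'))
            | Mul (a, b) ->
                (match (run a, run b) with
                 | Int x, Int y -> Int (x * y)
                 | a', b' -> Mul (a', b'))
        end

        (* Each phase can be exercised on its own, e.g. in a unit test. *)
        let () =
          assert (ConstFold.run (Add (Int 1, Mul (Int 2, Int 3))) = Int 7)

    Because each phase exposes only its input type, output type, and run function, a single phase can be tested against hand-written inputs without building the rest of the pipeline.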

    Liveness-Driven Random Program Generation

    Randomly generated programs are popular for testing compilers and program analysis tools, with hundreds of bugs in real-world C compilers found by random testing. However, existing random program generators may generate large amounts of dead code (computations whose result is never used). This leaves relatively little code to exercise a target compiler's more complex optimizations. To address this shortcoming, we introduce liveness-driven random program generation. In this approach the random program is constructed bottom-up, guided by a simultaneous structural data-flow analysis to ensure that the generator never generates dead code. The algorithm is implemented as a plugin for the Frama-C framework. We evaluate it in comparison to Csmith, the standard random C program generator. Our tool generates programs that compile to more machine code with a more complex instruction mix. Comment: Pre-proceedings paper presented at the 27th International Symposium on Logic-Based Program Synthesis and Transformation (LOPSTR 2017), Namur, Belgium, 10-12 October 2017 (arXiv:1708.07854).
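
    As a hedged illustration of the liveness-driven idea (not the Frama-C plugin's actual implementation), the sketch below builds a straight-line program backwards from its result variable while tracking a live-variable set, so every generated assignment defines a variable that is still live; the tiny statement language and all names are our own assumptions.

        (* A minimal sketch: generate n statements backwards from a live
           result variable, so no assignment is dead by construction. *)
        module S = Set.Make (String)

        type stmt = Assign of string * string * string   (* x = a + b *)

        let generate ?(seed = 42) (n : int) (result : string) : stmt list =
          Random.init seed;
          let fresh =
            let c = ref 0 in
            fun () -> incr c; Printf.sprintf "v%d" !c
          in
          let pick live =
            List.nth (S.elements live) (Random.int (S.cardinal live))
          in
          let rec go k live acc =
            if k = 0 then acc
            else begin
              (* The target must be live here, so the assignment is used. *)
              let target = pick live in
              let a = if Random.bool () then pick live else fresh () in
              let b = fresh () in
              (* Backwards liveness transfer: the target is killed, the
                 operands become live before this statement. *)
              let live' = S.add a (S.add b (S.remove target live)) in
              go (k - 1) live' (Assign (target, a, b) :: acc)
            end
          in
          go n (S.singleton result) []

    Whatever is still live at the top plays the role of the program's inputs; the actual generator handles richer statements and control flow than this sketch does.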

    Differentially Testing Soundness and Precision of Program Analyzers

    In the last decades, numerous program analyzers have been developed by both academia and industry. Despite their abundance, however, there is currently no systematic way of comparing the effectiveness of different analyzers on arbitrary code. In this paper, we present the first automated technique for differentially testing the soundness and precision of program analyzers. We used our technique to compare six mature, state-of-the-art analyzers on tens of thousands of automatically generated benchmarks. Our technique detected soundness and precision issues in most analyzers, and we evaluated the implications of these issues for both designers and users of program analyzers.
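
    As a hedged illustration of the verdict-comparison step in such differential testing (not the paper's actual technique or benchmark generation), the sketch below flags a candidate soundness issue when an analyzer verifies an assertion known to fail, and a relative precision issue when it warns on a true assertion that some peer analyzer proves; the types and names are our own assumptions.

        (* A minimal sketch of comparing analyzer verdicts against a known
           ground truth and against each other. *)
        type verdict = Verified | Warning          (* what an analyzer reports *)
        type truth = AssertHolds | AssertFails     (* known by construction *)

        type finding =
          | SoundnessIssue of string   (* verified an assertion that fails *)
          | PrecisionIssue of string   (* warned where a peer proved safety *)

        let compare_verdicts (truth : truth)
            (results : (string * verdict) list) : finding list =
          let some_peer_verifies =
            List.exists (fun (_, v) -> v = Verified) results
          in
          List.concat_map
            (fun (name, v) ->
              match truth, v with
              | AssertFails, Verified -> [ SoundnessIssue name ]
              | AssertHolds, Warning when some_peer_verifies ->
                  [ PrecisionIssue name ]
              | _ -> [])
            results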